
test: Make the ReadRows service in tests more modular #1462

Merged: 10 commits into main on Sep 23, 2024

Conversation

danieljbruce (Contributor)

Summary

This PR makes the ReadRows service in tests more modular.

Background

This PR is made up of four already-reviewed PRs that were merged onto this feature branch and are now being merged onto main.

  1. The first PR eliminates duplicate code and adds interfaces to simplify data flows.
  2. The second PR eliminates the duplicate copy of the ReadRows service.
  3. The third PR gets rid of a complex block of if statements.
  4. The fourth PR breaks the ReadRows service down into classes.

danieljbruce and others added 4 commits July 23, 2024 16:18
* Use prettyPrintRequest from readRowsImpl1

* Define interfaces for creating a Bigtable service

* Remove keyFrom and keyTo from each test

* Define the service inline for standard keys errors

* Add a comment about the refactor steps I am doing

* Add a header to the service parameters file

* Use the ChunkGeneratorParameters interface

* Simplify the diff by creating local variables

* Remove comment

* Eliminate chunks per response constant

* Change imports

* Replace with as ServerImplementationInterface

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Add a second test for an error at a random pos

* Add some comments for documentation

* Chunk generation - add the parameter

* Added comments to all the interfaces

* Fix a regression bug from the merge

---------

Co-authored-by: Leah E. Cole <[email protected]>
…sion of the first ReadRows (#1457)


* Set up the second ReadRows service to use params

* Remove duplicate copy of generate chunks from serv

* Remove second copy of isKeyInRowSet

* Eliminate duplicate copies of the debug log

* Fix a bug for the to and from service parameters

Only ignore keyFrom and keyTo when they are undefined, not just when they are falsy.

* Add cancel back into the mock service for testing

* Add variables to match service 1

* Add one more check to match the other service

* Remove usages of ReadRows2Impl

* Remove the new service

The old service has been generalized enough to mock correct server behaviour.

* Moved the position of the comment for 150 rows

* Eliminate setting keyTo and keyFrom

they are undefined anyway.


* Delete file

* Change splitted to split

* Remove export

* Don’t rename the interfaces

There is no point

* Increase Windows timeout

* Provide functions to eliminate if statements

* Eliminate the if statements

Make a direct call to generateChunksFromRequest

* Revert "Eliminate the if statements"

This reverts commit 0996e89.

* Revert "Provide functions to eliminate if statements"

This reverts commit 4a4761f.

* Change any’s to string | undefined

* Eliminate duplicate code for setting timeouts

* remove only

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

---------

Co-authored-by: Leah E. Cole <[email protected]>
…educing the size of it significantly (#1460)


* Group the property getter into a function

* Group key selection into functions

* Fix typo: default key

* Documentation for isInRowSet

* Eliminate variable - move function inline

* Omit optional selector on stream

* Create ReadRowsWritableStream interface

* Use new interface, remove ServerWritableStream

* Don’t pass the whole stream into helpers

* Add a function for generating the chunks

* The debug log should be a pass through parameter

* Solve compiler errors resulting immediately from merge

* Add debugLog parameter to various functions

* Add return type

* Remove exports - functions are only used in this module

* Revise merge correction

* Remove TODO for the stream type

* Update the getKeyValue function

* Eliminate place where debug log is needed

* Run linter
…e long function (#1461)


* Add some TODOs about how to address this next

* Pull send response into a class

* Create a dedicated class for defining the service

* Change name to readRowsRequestHandler

* Pull the function that generates the chunks into a separate module

* Generate documentation

* Add more documentation to the Service class

* Add debugLog as a method parameter


* Eliminate the function that creates a service

Add a factory method instead

* Add documentation for the constructor

* Remove the TODO

* Set the timeout for the test

* Increase the timeout
@danieljbruce danieljbruce requested review from a team as code owners August 19, 2024 14:44
@product-auto-label product-auto-label bot added size: l Pull request size is large. api: bigtable Issues related to the googleapis/nodejs-bigtable API. labels Aug 19, 2024
```ts
  ? defaultKey
  : keyRequestClosed
    ? parseInt(keyRequestClosed as string)
    : parseInt(keyRequestOpen as string) + 1;
```
Contributor

These combined ternary expressions are hard to read. Is this saying: if defaultKey is not undefined, use defaultKey; otherwise use keyRequestClosed; and if that is undefined, use the open key plus one instead of the closed key? I think this might achieve the same thing and be easier to read, or if not, maybe you can rewrite it in a way that preserves the logic:

```ts
return defaultKey || keyRequestClosed
  ? parseInt(keyRequestClosed as string)
  : parseInt(keyRequestOpen as string) + 1;
```

Contributor Author

We can't do it quite like this, because when defaultKey is 0 it is falsy, so it wouldn't get used, but we want it to be used when it is 0. What I did instead was an equality check rather than an inequality check.
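The falsy-zero pitfall described above can be sketched in isolation (the function and parameter names here are illustrative, not the actual test code): comparing against undefined keeps 0 usable, where a truthiness check would silently skip it.

```typescript
// Hypothetical sketch of the key-selection logic. Checking `!== undefined`
// (rather than truthiness) ensures a defaultKey of 0 is still honored.
function pickKey(
  defaultKey?: number,
  keyRequestClosed?: string,
  keyRequestOpen?: string
): number {
  return defaultKey !== undefined
    ? defaultKey
    : keyRequestClosed !== undefined
      ? parseInt(keyRequestClosed)
      : parseInt(keyRequestOpen as string) + 1;
}

console.log(pickKey(0)); // 0 (a truthy check would have skipped it)
console.log(pickKey(undefined, '7')); // 7
console.log(pickKey(undefined, undefined, '7')); // 8
```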

```ts
  const stringKey = binaryKey.toString();
  debugLog(`sent lastScannedRowKey = ${stringKey}`);
}
if (!canSendMore) {
```
Contributor

What's the difference between lastScannedRowKey and canSendMore? Maybe a comment describing the branches would be useful. Also, is it possible to have a lastScannedRowKey and !canSendMore? If not, perhaps an else would be useful.

Contributor Author

canSendMore is the value returned by stream.write(), indicating whether the stream needs to drain before more data is written. lastScannedRowKey is actually part of the response, so I think the two if statements are independent.

Contributor Author

This code is pretty much an exact copy of what we have on main:

```ts
const canSendMore = stream.write(response);
if (response.chunks && response.chunks.length > 0) {
  debugLog(`sent ${response.chunks.length} chunks`);
}
if (response.lastScannedRowKey) {
  const binaryKey = Buffer.from(
    response.lastScannedRowKey as string,
    'base64'
  );
  const stringKey = binaryKey.toString();
  debugLog(`sent lastScannedRowKey = ${stringKey}`);
}
if (!canSendMore) {
  debugLog('awaiting for back pressure');
  await new Promise<void>(resolve => {
    stopWaiting = resolve;
    stream.once('drain', resolve);
  });
}
resolve();
```

Contributor Author

I added comments for the if blocks.

@danieljbruce danieljbruce merged commit cf6906b into main Sep 23, 2024
18 of 22 checks passed
@danieljbruce danieljbruce deleted the 3527322442-to-main-2 branch September 23, 2024 20:06
kevkim-codes added a commit to kevkim-codes/nodejs-bigtable that referenced this pull request Sep 27, 2024
test: Make the ReadRows service in tests more modular (googleapis#1462)